Statistical Cooling: a General Approach to Combinatorial Optimization Problems
Authors: E. H. L. Aarts and P. J. M. van Laarhoven
Abstract
Statistical cooling is a new combinatorial optimization technique based on Monte-Carlo iterative improvement. The method originates from the analogy between the annealing of a solid as described by the theory of statistical physics and the optimization of a system with many degrees of freedom. In the present paper we present a general theoretical framework for the description of the statistical cooling algorithm, based on concepts from the theory of Markov chains. A cooling schedule is presented by which near-optimal results can be obtained within polynomial time. The performance of the algorithm is discussed by means of a special class of traveling salesman problems.

Math. Rev.: 05, 49, 60, 82.

1. Introduction

During the last decade much effort has been invested in the subject of combinatorial optimization 1). The present-day interest in this field originates, to a large extent, from the need to solve the many combinatorial optimization problems related to computer science and VLSI design 2-4). Examples include the traveling salesman problem, one- and two-dimensional cell arrangement, macro-placement, global routing and logic minimization. The study of combinatorial optimization focuses on finding techniques to optimize a function of a finite, possibly large, number of variables. This so-called cost function is a quantitative measure of the 'goodness' (with respect to some critical quantities) of the system that is to be optimized, and it depends on details of the internal configurations of the system. Many combinatorial optimization problems have been proven intractable and belong to the class of NP-complete problems, i.e. no algorithm is known that gives an exact solution to the problem within polynomial time (the computation time increases at least exponentially with the complexity of the problem). If, furthermore, such an algorithm were found, it could be transformed to solve all the problems of the corresponding class in polynomial time.
Philips Journal of Research, Vol. 40, No. 4, 1985, p. 193. E. H. L. Aarts and P. J. M. van Laarhoven.

Garey and Johnson extensively studied this type of combinatorial optimization problem and provided a useful classification of the various problems 2). As a result of their intractability, problems with a large complexity cannot be solved exactly within a realistic amount of computation time. Less time-consuming algorithms can be constructed by applying heuristic methods striving for near-optimal solutions. A vast number of different heuristic algorithms has been reported in the literature 1-3,5,6). The heuristics applied in these algorithms are in most cases strongly problem-dependent. This is considered a major drawback, since it prohibits flexible implementation and general application. Statistical cooling is a new heuristic optimization technique that is generally applicable, easy to implement, and whose optimization results can be as good as desired. The algorithm is based on Monte-Carlo methods and may in a way be considered a special form of iterative improvement 6) (see also secs 2 and 3). The technique originates from computer simulation methods used in condensed matter physics and was first employed successfully for combinatorial optimization problems by Kirkpatrick et al. 7). It has ever since found great acceptance and has been applied to a number of problems in various disciplines 8). The work of Kirkpatrick is strongly based on the analogy between the annealing of a solid and the optimization of a system with many independent variables. Solids are annealed by raising the temperature to a (maximal) value for which the particles arrange randomly in the liquid phase, followed by cooling to force the particles into the low-energy states of a regular lattice.
At high temperatures all possible states can be reached, although low-energy states are occupied with a larger probability; lowering the temperature decreases the number of accessible states, and the system will finally be frozen into its ground state, provided the maximum temperature is high enough and the cooling is sufficiently slow. In combinatorial optimization a similar situation occurs: the system may occur in many different configurations. Any configuration has a cost that is given by the value of the cost function for that particular configuration. Similar to the simulation of the annealing of solids, one can statistically model the evolution of the system that has to be optimized into a state that corresponds with the minimum value of the cost function. So far the literature provides a number of papers that deal predominantly with applications of the statistical cooling algorithm to various combinatorial optimization problems 7-18), and in most cases the description of the algorithm starts off from the physical analogy formulated by Kirkpatrick et al. 7). We felt that a more formal theoretical description and analysis of the algorithm would be desirable.

The intention of the present paper is to provide a theoretical formalism and an analysis of the convergence of the statistical cooling algorithm. The formulation starts off with an introductory section on iterative improvement (sec. 2). Section 3 presents a mathematical formalism for the statistical cooling algorithm based on the theory of Markov chains 19). The convergence of the algorithm is discussed in sec. 4. In this section a number of formal concepts and additional functions are introduced that are useful for the understanding of the convergence behaviour of the algorithm. In sec. 5 we present an analysis and a discussion of the results obtained in sec. 4 by applying the statistical cooling algorithm to a special class of traveling salesman problems. In sec. 6 the convergence of the algorithm is re-examined. Here we introduce a number of heuristic approximations by which faster convergence of the algorithm is achieved. In this section we, furthermore, present a new cooling schedule for the statistical cooling algorithm with emphasis on feasibility and generality. The results of sec. 6 are illustrated and discussed in sec. 7 by applying the algorithm with the new cooling schedule to the traveling salesman problems of sec. 5. A summary of the algorithm is presented in sec. 8. The paper ends with some conclusions and remarks.

2. Iterative improvement

The problem of finding near-optimal solutions for a given combinatorial optimization problem is solved by heuristically optimizing the corresponding cost function. The cost function is a function of many variables and represents a quantitative measure of the 'goodness', with respect to certain criteria, of any internal configuration (or state) of the system that has to be optimized. A system configuration is represented by a state vector r_i ∈ R whose components uniquely determine the given configuration. The configuration space R is given by the finite set of all possible system configurations. We, furthermore, introduce the set I_R = {1, 2, ..., |R|} as the set of state labels of the system configurations contained in R. The cost function C(r_i), with C: R → IR and i ∈ I_R, assigns a real number to each state. Here, the cost function is defined in such a way that the lower the value of C(r_i), the better is the configuration represented by the corresponding state vector. The solution of the optimization problem is given by a configuration that corresponds with a state vector obtained by minimizing the cost function. Iterative improvement 6) is the best known heuristic optimization method for combinatorial optimization problems.
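These notions can be made concrete with the travelling salesman problem, which is used as a test case later in the paper: a state vector is a permutation of the city labels, the configuration space is the set of all such permutations, and the cost function is the tour length. The sketch below is only an illustration under these assumptions; the names and coordinates are our own, not the paper's.

```python
import math

# A configuration (state vector) for an n-city travelling salesman
# problem: a permutation of the city labels, visited in order.
# The configuration space R is the finite set of all n! permutations.

def tour_cost(tour, coords):
    """Cost function C: R -> IR, here the total Euclidean tour length.
    Lower cost means a better configuration."""
    n = len(tour)
    total = 0.0
    for k in range(n):
        x1, y1 = coords[tour[k]]
        x2, y2 = coords[tour[(k + 1) % n]]  # wrap around to close the tour
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Four cities on a unit square; the optimal tour is the perimeter.
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_cost([0, 1, 2, 3], coords))  # perimeter tour: 4.0
print(tour_cost([0, 2, 1, 3], coords))  # self-crossing tour: costlier
```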
The algorithm starts off with a given state, i, with state vector r_i and initial cost C(r_i). Next, by rearranging the system a new configuration, j, is generated with corresponding state vector r_j and cost C(r_j). The difference in cost, given by

    ΔC_ij = C(r_j) − C(r_i),     (1)

determines whether the new configuration is accepted or not, i.e. for negative values of ΔC_ij the cost decreases and the new configuration is accepted, whereas for positive values the cost increases and the new configuration is rejected. This procedure is then repeated until no further improvement is achieved. The inherent limitation of the iterative improvement algorithm, when employing relatively simple rearrangements, is that it usually gets stuck in a local minimum. To avoid this problem some tricks must be applied to assure that the final solution is close to a global minimum. If the algorithm gets stuck one may resort to more complicated rearrangements 6). The process of choosing the appropriate complex rearrangements is strongly determined by the nature of the problem and requires expert knowledge. Many combinatorial optimization problems in the field of computer science and VLSI design, however, require fast and flexible implementation. Optimization methods that incorporate complex system rearrangements and problem-dependent manipulations do not meet these requirements.

3. Statistical cooling

A more general and flexible heuristic optimization method is given by the statistical cooling technique, also called simulated annealing 7) or Monte-Carlo annealing 10), which can be regarded as a Monte-Carlo iterative improvement method. This method accepts in a limited way deteriorations of the system, i.e. configurations that correspond with an increase in the cost function.
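The accept-only-improvements rule of iterative improvement (sec. 2) can be sketched in code before turning to the cooling details. This is a minimal sketch, not the paper's own implementation: the segment-reversal move is one assumed example of a 'simple rearrangement', and stopping after a fixed number of consecutive rejections is likewise our assumption.

```python
import random

def iterative_improvement(cost, rearrange, state, max_fail=1000):
    """Greedy local search: accept a rearrangement only when
    Delta C_ij = C(r_j) - C(r_i) is negative; stop after max_fail
    consecutive rejections (i.e. when stuck in a local minimum)."""
    c = cost(state)
    fails = 0
    while fails < max_fail:
        candidate = rearrange(state)
        c_new = cost(candidate)
        if c_new - c < 0:          # Delta C_ij < 0: cost decreases, accept
            state, c = candidate, c_new
            fails = 0
        else:                      # Delta C_ij >= 0: reject
            fails += 1
    return state, c

# An assumed simple rearrangement: reverse a random sub-sequence
# of the state vector (a 2-opt-style move for tour problems).
def reverse_segment(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

Because only improvements are accepted, the returned cost is never worse than the initial cost, but the result is in general only a local minimum.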
Similar to the iterative improvement algorithm, the statistical cooling algorithm starts off with a given initial configuration and generates a sequence of new system configurations by rearranging the system. New configurations are accepted according to an acceptance criterion which also allows for deteriorations of the system. Initially the acceptance criterion is chosen such that system deteriorations are accepted with large probabilities. As the optimization process proceeds, the acceptance criterion is modified in such a way that the probability for accepting system deteriorations becomes smaller and smaller, and at the end of the optimization process this probability approaches zero. In this way the optimization process is prevented from getting stuck in local minima, thus enabling it to arrive at a global minimum, employing only simple rearrangements of the system. The decrement in the probability for accepting system deteriorations is governed by a so-called cooling control parameter. It is possible to decrease this probability during the optimization process after each system rearrangement. Here we use an approach in which the probability in between decrements is kept constant for a number of system rearrangements. The advantage of this approach is that the optimization process can be described in terms of a finite set of homogeneous Markov chains 19), for which the limit behaviour is easier to describe than for inhomogeneous Markov chains 19). This results in a more transparent description of the statistical cooling algorithm. A Markov chain is a sequence of states for which the probability of occurrence is determined by the transition probability from state i to state j, defined as

    T_ij(β) = B_ij(β) P_ij   if i ≠ j,     (2)
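The loop structure described above, with the acceptance probability held constant over each homogeneous Markov chain, can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: the Metropolis-style acceptance probability exp(−ΔC/β) and the geometric decrement of the control parameter (here called beta) are our assumptions; the paper's own cooling schedule is derived later (sec. 6).

```python
import math
import random

def statistical_cooling(cost, rearrange, state,
                        beta0=10.0, alpha=0.9, chain_len=100, beta_min=1e-3):
    """Monte-Carlo iterative improvement governed by a cooling control
    parameter beta. For each value of beta, a homogeneous Markov chain
    of chain_len trials is generated; deteriorations (Delta C > 0) are
    accepted with probability exp(-Delta C / beta), which approaches
    zero as beta is lowered."""
    c = cost(state)
    beta = beta0
    while beta > beta_min:
        for _ in range(chain_len):        # one homogeneous Markov chain
            candidate = rearrange(state)
            delta = cost(candidate) - c   # Delta C_ij, eq. (1)
            if delta < 0 or random.random() < math.exp(-delta / beta):
                state, c = candidate, c + delta
        beta *= alpha                     # assumed geometric decrement
    return state, c
```

At large beta almost every rearrangement is accepted (the 'liquid' phase); as beta is decremented the process gradually reduces to plain iterative improvement.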